PROBLEMS WITH THE BLOCKS WORLD


	The object of this note is to outline some of the problems
that AI must solve, relating them to the much-studied "blocks
world" of (Winograd 197x).  We shall start with the most general
blocks problem and simplify it step by step until we reach the
level of problem that has actually been solved.

	Suppose that a group of people is to build a house jointly,
sharing in the investment, the work, and the proceeds.  We would
like to program a robot and send it forth to take part in the
enterprise.

	#.  We must start by providing the robot with motivation.
Suppose that it wishes to spend no more than six months at the
job, to invest no more than $10,000, and, when the house is sold,
to maximize the rate of return on its total investment, counting
any labor it puts in at $20 per hour.  (It is a hard-working robot
and values its labor.)  It must strike an appropriate bargain with
its human collaborators.

	There is no difficulty in programming the robot to compute
the return on investment given an agreed share, the price of the
house, and its inputs in money and labor.  However, we don't know
the rules that would allow it to compute the necessary
probabilities given the information that is available in the real
world.  If we could limit the factors to be taken into account, we
could probably concoct a rule that would be no worse than present
human performance and perhaps better.  Even so, our program would
require prepared inputs and would have no way of taking into
account new information such as the state of a supplier's union
contract negotiations.
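
	The easy part can be made concrete.  The following sketch
(in Python, with invented figures and function names; nothing here
is from the note itself) computes the rate of return, counting the
robot's labor at $20 per hour:

    def rate_of_return(share, sale_price, cash_invested, labor_hours,
                       labor_rate=20.0):
        # Total investment counts the robot's labor at $20 per hour.
        investment = cash_invested + labor_hours * labor_rate
        # The robot's proceeds are its agreed share of the sale price.
        proceeds = share * sale_price
        return (proceeds - investment) / investment

    # Hypothetical figures: a 25% share of a $90,000 house, with
    # $10,000 in cash and 600 hours of labor invested.
    print(rate_of_return(0.25, 90000.0, 10000.0, 600.0))

The hard part, estimating the probabilities that feed such a
computation from real-world information, is exactly what the
paragraph above says we cannot yet program.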

	The first difficulty that we shall consider is forming a
model of the motivations of the robot's collaborators, so that it
can come to an agreement with them.

	(Let me point out that there are two kinds of models one can
form of other people, each of which is appropriate in some
circumstances.  The simpler kind of model regards the other person
as an automaton that responds to certain stimuli with certain
actions.  This stimulus-response relation may be imperfectly known.
The second kind of model ascribes goals and/or a utility function
to the other being.  In that case one can ask what actions it
believes will achieve its goals or what actions on our part will
benefit it.  Using the automaton model should not be regarded
pejoratively; it is often appropriate.  In my role as a classroom
teacher, I prefer to be regarded as an automaton that will reward
good work appropriately and will answer questions appropriately.  I
don't especially want the students speculating about my inner
motivations.  In other human relationships, I prefer having my
motivations and my welfare considered.)
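
	The distinction can be put in programming terms.  The sketch
below (in Python; all names and structures are my own invention,
not from the note) contrasts the two kinds of model: the first
predicts from a possibly incomplete stimulus-response table, the
second predicts by asking what action the other agent believes
will best serve its utility:

    class AutomatonModel:
        """The other agent as a stimulus-response table."""
        def __init__(self, table):
            self.table = table          # stimulus -> expected action
        def predict(self, stimulus):
            # The relation may be imperfectly known: None means "unknown".
            return self.table.get(stimulus)

    class IntentionalModel:
        """The other agent as a utility maximizer."""
        def __init__(self, utility, believed_outcome):
            self.utility = utility                    # outcome -> value
            self.believed_outcome = believed_outcome  # action -> outcome
        def predict(self, available_actions):
            # Predict the action the agent believes best serves its goals.
            return max(available_actions,
                       key=lambda a: self.utility(self.believed_outcome(a)))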

	At present, no one has built a reasonable model of either
kind of human behavior into a computer program.  Therefore, let's
give up letting our robot be an equal partner in the building
co-op and make it simply a servant.

	#. In its role as servant, the robot must communicate with
the other workers.  We don't know how to program free communication
in natural language, so let's give that up.

	We must now devise an artificial language in which the robot
and its collaborators can express communications appropriate to
the job.  Since the robot is a servant, it is given its goals by
the users.
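
	As an illustration of what such a language might look like
(the vocabulary and parser below are invented for this sketch and
are not proposed in the note), a first version could be a small
set of goal-giving commands that the robot parses into tasks:

    # Invented mini-language: each message is a verb plus arguments,
    # e.g. "put beam-3 on post-7" or "fetch nails".

    VERBS = {"put": 2, "fetch": 1, "report": 0}   # verb -> argument count

    def parse(message):
        words = message.split()
        verb, args = words[0], words[1:]
        if verb not in VERBS:
            raise ValueError("unknown verb: " + verb)
        # "on" is a connective in "put X on Y"; drop it before counting.
        args = [w for w in args if w != "on"]
        if len(args) != VERBS[verb]:
            raise ValueError("wrong number of arguments for " + verb)
        return (verb, args)

    print(parse("put beam-3 on post-7"))   # ('put', ['beam-3', 'post-7'])

A fixed vocabulary like this avoids the unsolved natural-language
problem while still letting the users state goals the robot can
act on.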